Current Research in Neurobiology
Elsevier BV
All preprints, ranked by how well they match Current Research in Neurobiology's content profile, based on 14 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
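The page does not explain how match scores are computed. As a rough illustration only, here is a minimal sketch of one common approach: pool the journal's previously published abstracts into a bag-of-words profile and rank preprints by cosine similarity against it. All function names are hypothetical, and the actual service may well use embeddings or a trained classifier instead.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def journal_profile(abstracts: list[str]) -> Counter:
    # Pool the journal's previously published abstracts into one term-count profile.
    profile = Counter()
    for text in abstracts:
        profile.update(text.lower().split())
    return profile

def rank_preprints(preprints: dict[str, str], profile: Counter) -> list[tuple[str, float]]:
    # Score each preprint abstract against the profile and sort, best match first.
    scores = {pid: cosine(Counter(text.lower().split()), profile)
              for pid, text in preprints.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A preprint sharing no vocabulary with the profile scores 0.0, which is consistent with the page's note that the average match score is 0.00%.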
Huang, L.; Hardyman, F.; Edwards, M.; Galliano, E.
Activity-dependent neuronal plasticity is crucial for animals to adapt to dynamic sensory environments. Traditionally, research on activity-dependent plasticity has used sensory deprivation approaches in animal models, and it has focused on its effects in primary sensory cortices. However, emerging evidence emphasizes the importance of activity-dependent plasticity both in the sensory organs and in sub-cortical regions where cranial nerves relay information to the brain. Additionally, a critical question arises: do different sensory modalities share common cellular mechanisms for deprivation-induced plasticity at these central entry points? Furthermore, does the duration of deprivation correlate with specific plasticity mechanisms? This study aims to systematically review and meta-analyse research papers that investigated visual, auditory, or olfactory deprivation in rodents. Specifically, it explores the consequences of sensory deprivation in homologous regions at the first central synapse after the cranial nerve: vision, the lateral geniculate nucleus and superior colliculus; audition, the ventral and dorsal cochlear nucleus; olfaction, the olfactory bulb. The systematic search yielded 91 research papers (39 vision, 22 audition, 30 olfaction), revealing significant heterogeneity in publication trends, experimental methods of inducing deprivation, measures of deprivation-induced plasticity, and reporting across the three sensory modalities. Nevertheless, despite these methodological differences, commonalities emerged when correlating the plasticity mechanisms with the duration of the sensory deprivation. Following short-term deprivation (up to 1 day), all three systems showed reduced activity levels and increased disinhibition. Medium-term deprivation (1 day to a week) induced greater glial involvement and synaptic remodelling. Long-term deprivation (over a week) predominantly led to macroscopic structural changes, including tissue shrinkage and apoptosis. These findings underscore the importance of standardizing methodologies and reporting practices. Additionally, they highlight the value of cross-modal synthesis for understanding how the nervous system, including peripheral, pre-cortical, and cortical areas, responds to and compensates for the loss of sensory input.
Greco, A.; Baek, S.; Rastelli, C.; Siegel, M.; Braun, C.
Spatial hearing allows humans to localize sound sources in the azimuth plane using interaural time (ITD) and level (ILD) differences, but the contribution of additional auditory features remains unclear. To investigate this, we measured human localization performance with natural and artificial stimuli that selectively included or excluded ITD and ILD as primary interaural cues. As expected, human listeners relied synergistically on ITD and ILD for accurate azimuth localization. Moreover, even when both primary cues were absent, localization performance remained above chance level. We compared human performance with state-of-the-art deep neural networks (DNNs) optimized for sound localization to investigate possible computational mechanisms underlying this robust performance. In contrast to humans, DNNs demonstrated high accuracy only for stimuli that resembled their training regime but failed when primary interaural cues were absent. This human-DNN misalignment highlights a fundamental distinction in sensory processing strategies, potentially arising from the simplicity bias inherent in DNN training, with human reliance on a wider range of auditory features likely reflecting evolutionary pressures favoring adaptability across diverse acoustic environments. Together, our results demonstrate the robustness of human spatial hearing beyond primary interaural cues and point to promising directions for advancing artificial systems and informing clinical applications, such as cochlear implants and auditory prosthetics.
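As a toy illustration of the two primary cues this abstract describes (not the authors' analysis code), an ITD can be estimated from the cross-correlation peak between the two ear signals and an ILD from their RMS level ratio. The sample rate and signal parameters below are assumptions for the demo.

```python
import numpy as np

FS = 44_100  # sample rate in Hz; assumed for illustration

def estimate_itd(left, right, fs=FS):
    """ITD from the cross-correlation peak, in seconds.
    Negative values mean the left-ear signal leads."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

def estimate_ild(left, right):
    """ILD in dB (positive = left ear louder)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(left) / rms(right))

# Toy binaural stimulus: a noise burst reaching the left ear 0.5 ms earlier
# and 6 dB louder than the right ear.
rng = np.random.default_rng(0)
burst = rng.standard_normal(FS // 10)
delay = int(0.0005 * FS)  # 0.5 ms = 22 samples
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst]) * 10 ** (-6 / 20)
```

Running the estimators on this toy stimulus recovers an ITD of about -0.5 ms (left leads) and an ILD of about +6 dB.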
Gastaldon, S.; Gheller, F.; Bonfiglio, N.; Brotto, D.; Bottari, D.; Trevisi, P.; Martini, A.; Vespignani, F.; Peressotti, F.
This study provides the first neurophysiological evidence of how cochlear implant (CI) input affects predictive processing during audiovisual language comprehension in deaf individuals. Using EEG, we compared 18 CI users with 18 normal-hearing (NH) controls during sentence comprehension where final word predictability was determined by high or low semantic constraint (HC vs. LC) of the preceding sentence frame. Between the sentence frame and the final word, an 800 ms silent gap was introduced. Mouth visibility was manipulated during sentence frames (visible or digitally occluded; V+ vs. V-), while the final words were always presented with the mouth visible. In NH participants, lower-beta power (12-15 Hz) in left frontal and central sensors decreased for HC vs. LC contexts during the pre-target silent gap, but only when the mouth was visible, suggesting active prediction generation. In CI users, this lower-beta power decrease was absent. After final word presentation, both groups showed N400 predictability effects, indicating preserved prediction evaluation. However, CI users exhibited extended N400 effects in the V+ condition, suggesting additional processing demands. Across all participants, pre-target beta modulations correlated with language production abilities, supporting prediction-by-production frameworks. Within CI users, poorer audiometric thresholds correlated with larger N400 constraint effects, possibly indicating greater reliance on contextual prediction to compensate for degraded sensory input. These findings demonstrate that CI-mediated perception alters the neural mechanisms of prediction generation. The link between production skills and predictive mechanisms suggests that strengthening expressive language abilities may enhance predictive processing in CI users.
Kropf, B.; Rao, M. S.; Gutschalk, A.; Andermann, M.; Praetorius, M.; Rupp, A.; Steinmetzger, K.
In cases of severe hearing loss early in life or congenital deafness, cochlear implants (CIs) represent the method of choice to restore hearing and enable language acquisition. While speech intelligibility has been shown to improve during the first year after implantation and then reach a plateau, the underlying neuroplastic changes are poorly understood. Here, we longitudinally compared the cortical processing of speech stimuli in a case-control design with two groups of pre-lingually deafened CI users (4.4 vs. 25.8 months of CI experience) and an age-matched control group with normal hearing (NH; mean group ages ~9 years). In two experiments, participants were presented with running speech and vowel sequences while fNIRS and EEG data were obtained simultaneously. Despite trends in this direction, cortical activity did not increase significantly with more CI experience and did not approach the higher levels observed in the NH controls. However, in the speech experiment, the less experienced CI group showed an abnormal shift of activity to the right hemisphere not observed in the other two groups. These results hence imply that adaptation to CI-based hearing is characterised not by a gradual increase of activity in the left-hemispheric language network, but by a reduction of abnormal activity elsewhere.
Hladek, L.; Seitz, A. R.; Kopco, N.
The processes of audio-visual integration and of visually guided re-calibration of auditory distance perception are not well understood. Here, the ventriloquism effect (VE) and aftereffect (VAE) were used to study these processes in a real reverberant environment. Auditory and audio-visual (AV) stimuli were presented, in interleaved trials, over a range of distances from 0.7 to 2.04 m in front of the listener, whose task was to judge the distance of auditory stimuli or of the auditory components of AV stimuli. The relative location of the visual and auditory components of AV stimuli was fixed within a session such that the visual component was presented from a distance 30% closer (V-closer) than the auditory component, 30% farther (V-farther), or aligned (V-aligned). The study examined the strength of VE and VAE as a function of the reference distance and of the direction of the visual-component displacement, and the temporal profile of the build-up/break-down of these effects. All observed effects were approximately independent of target distance when expressed in logarithmic units. The VE strength, measured in the AV trials, was roughly constant for both directions of visual-component displacement such that, on average, the responses shifted in the direction of the visual component by 72% of the audio-visual disparity. The VAE strength, measured on the interleaved auditory-only trials, was stronger in the V-farther than the V-closer condition (44% vs. 31% of the audio-visual disparity, respectively). The VAE persisted into post-adaptation auditory-only blocks of trials; however, it was weaker and the V-farther/V-closer asymmetry was reduced. The rates of build-up/break-down of the VAE were also asymmetrical, with slower adaptation in the V-closer condition. These results suggest that, on a logarithmic scale, AV distance integration is symmetrical, independent of the direction of induced shift, while the visually induced auditory distance re-calibration is asymmetrical: stronger and faster when evoked by more distant visual stimuli.
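The shift percentages reported above can be made concrete with a small sketch. Assuming, as the abstract states, that shifts are expressed as a fraction of the audio-visual disparity in log-distance units, a hypothetical helper might look like this (the numbers in the example are illustrative, not from the study):

```python
import math

def log_shift_fraction(baseline, response, auditory, visual):
    """Shift of the distance response toward the visual component, as a
    fraction of the audio-visual disparity, in log-distance units."""
    shift = math.log(response) - math.log(baseline)
    disparity = math.log(visual) - math.log(auditory)
    return shift / disparity

# Example: auditory target at 1.0 m, visual component 30% closer (0.7 m).
# A response that moves from 1.0 m to 0.77 m covers ~73% of the log disparity,
# close to the 72% average VE shift reported.
frac = log_shift_fraction(baseline=1.0, response=0.77, auditory=1.0, visual=0.7)
```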
Cunningham, E.; Brang, D.
Sounds elicit rapid responses in human visual cortex. Anatomical work in nonhuman primates suggests that these responses may be enabled by monosynaptic, corticocortical projections between auditory and visual areas. Such projections would not only provide routes for rapid modulation of visual processing in the sighted, but also avenues for cortical adaptation following vision loss. However, there is little available information on the presence and organization of such projections in humans. Here, we address this question by examining intracranial responses to direct electrical stimulation of the superior temporal gyrus (STG) in 23 patients (both male and female). Mid- and posterior-STG stimulation produced rapid responses in early visual cortex (from 18 ms), with a distribution that favored lateral occipital cortex and anterior calcarine regions near peripheral V1. Early visual cortex (V1/V2) responded most strongly to stimulation over mid-STG, whereas lateral occipital cortex (near V5/hMT+) exhibited the most robust response to posterior STG stimulation. These data demonstrate that communication from auditory to visual cortex can occur in humans at latencies that are compatible with corticocortical, potentially monosynaptic, transmission. Responses in visual cortex were organized in a manner similar to that of nonhuman primates, with preliminary evidence for an exception over the occipital pole. Overall, the results provide support for theories that human visual responses to sound are inherited, in part, from auditory cortex.
Ciesla, K.; Wolak, T.; Amedi, A.
Since childhood, we experience speech as a combination of audio and visual signals, with visual cues particularly beneficial in difficult auditory conditions. This study investigates an alternative multisensory context for speech, namely audio-tactile, which could prove beneficial for rehabilitation in the hearing-impaired population. We show improved understanding of distorted speech in background noise when it is combined with low-frequency, speech-extracted vibrotactile stimulation delivered to the fingertips. This rapid effect may be related to the fact that both auditory and tactile signals contain the same type of information. Changes in functional connectivity due to audio-tactile speech training are primarily observed in the visual system, including early visual regions, lateral occipital cortex, the middle temporal motion area, and the extrastriate body area. These effects, despite the lack of visual input during the task, possibly reflect automatic involvement of areas supporting lip-reading and spatial aspects of language, such as gesture observation, in difficult acoustic conditions. For audio-tactile integration, we show increased connectivity of a sensorimotor hub representing the entire body with the parietal system of motor planning based on multisensory inputs, along with several visual areas. After training, the sensorimotor connectivity increases with high-order and language-related frontal and temporal regions. Overall, the results suggest that the new audio-tactile speech task activates regions that partially overlap with the established brain network for audio-visual speech processing. This further indicates that neuronal plasticity related to perceptual learning first builds upon an existing structural and functional blueprint for connectivity. Further effects reflect task-specific behaviour related to body and spatial perception, as well as tactile signal processing. Possibly, a longer training regime is required to strengthen direct pathways between the auditory and sensorimotor brain regions during audio-tactile speech processing.
Bounds, H. A.; Quintana, D.; Brown, J. A.; Wang, M.; Bhatla, N.; Wiegert, J. S.; Adesnik, H.
Recent work has demonstrated that both permanent lesions and acute inactivation experiments can lead to erroneous conclusions about the causal role of brain areas in specific behaviors, casting serious doubt on major avenues by which neuroscientists study the brain. To overcome this challenge, we developed a three-stage optogenetic approach which leverages the ability to precisely control the temporal period of regional inactivation with either brief or sustained illumination, enabling investigators to dissociate between putative permissive and instructive roles of brain areas in behavior. We applied this approach to the mouse primary visual cortex (V1) to probe whether V1 is permissive or instructive for the detection of low-contrast stimuli. Acute inactivation of V1 drastically suppressed performance, but during persistent inactivation, the animals' contrast detection recovered to pre-silencing levels. This recovery was itself reversible, as returning the animals to intermittent V1 inactivation reinstated the behavioral deficit. These results argue that V1 is the default circuit mice use to detect visual stimuli, but in its absence, other regions can compensate for it. This novel, temporally controllable optogenetic perturbation paradigm should be useful in other brain circuits to assess whether they are instructive or permissive in a brain function or behavior.
Zhang, Y.; Wang, X.; Zhu, L.; Bai, S.; Li, R.; Sun, H.; Qi, R.; Cai, R.; Li, M.; Jia, G.; Schriver, K. E.; Li, X.; Gao, L.
Cortical feedback has long been considered crucial for the modulation of sensory processing. In the mammalian auditory system, studies have suggested that corticofugal feedback can have excitatory, inhibitory, or both effects on the response of subcortical neurons, leading to controversies regarding the role of corticothalamic influence. This has been further complicated by studies conducted under different brain states. In the current study, we used cryo-inactivation in the primary auditory cortex (A1) to examine the role of corticothalamic feedback on medial geniculate body (MGB) neurons in awake marmosets. The primary effects of A1 inactivation were a frequency-specific decrease in the auditory response of MGB neurons coupled with an increased spontaneous firing rate, which together resulted in a decrease in the signal-to-noise ratio. In addition, we report for the first time that A1 robustly modulated the long-lasting sustained response of MGB neurons, such that frequency tuning changed after A1 inactivation: neurons with sharp tuning increased their tuning bandwidth, whereas those with broad tuning decreased it. Taken together, our results demonstrate that corticothalamic modulation in awake marmosets serves to enhance sensory processing in a way similar to center-surround models proposed in the visual and somatosensory systems, a finding which supports common principles of corticothalamic processing across sensory systems.
Fang, S.; Fleiner, T.; Peng, F.; Buchholz, S.; Zeeshan, M.; Rosskothen-Kuhl, N.; Schnupp, J.
Cochlear implants (CIs) have successfully restored hearing in more than one million patients with severe to profound hearing loss worldwide. While CIs effectively restore speech perception in quiet environments, sound localization remains challenging for bilateral CI users, particularly their ability to utilize interaural time differences (ITDs). The majority of clinical CI processors use a coding strategy that encodes ITD information only in the envelope of electrical pulse trains rather than in their pulse timing, which may contribute to the poorer spatial hearing perception of CI users. We recently demonstrated in a behavioral study on early-deafened, bilaterally CI-implanted rats that pulse-timing ITDs completely dominate ITD perception, while sensitivity to envelope ITDs is almost negligible in comparison. Building on this, we here investigated the neurophysiological sensitivity of the inferior colliculus (IC) to envelope and pulse-timing ITDs at two different pulse rates (900 and 4500 pulses/s) and three different modulation rates (5, 20 and 100 Hz) in CI rats with different hearing experiences. Our results indicate that IC neurons exhibit far greater sensitivity to pulse-timing ITDs than envelope ITDs, independent of pulse rate, modulation rate or hearing experience. These findings suggest that to improve binaural hearing outcomes in bilateral CI users, clinical stimulation strategies should provide informative pulse-timing ITDs.
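The envelope-ITD versus pulse-timing-ITD distinction can be illustrated with a toy stimulus sketch (not the authors' stimulation code; all parameters assumed): delaying the pulse times in one ear introduces a pulse-timing ITD, while shifting only the modulator phase introduces an envelope ITD with identical pulse timing.

```python
import numpy as np

FS = 100_000  # samples per second; fine enough to place pulses precisely

def pulse_train(rate_pps, mod_hz, dur_s, onset_s=0.0, env_phase=0.0, fs=FS):
    """Amplitude-modulated pulse train (pulses idealized as unit impulses).
    onset_s shifts the pulse *timing*; env_phase shifts the *envelope* only."""
    t = np.arange(int(dur_s * fs)) / fs
    train = np.zeros_like(t)
    idx = (np.arange(onset_s, dur_s, 1.0 / rate_pps) * fs).astype(int)
    train[idx[idx < len(train)]] = 1.0
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t + env_phase))
    return train * envelope

# Pulse-timing ITD: identical envelopes, right-ear pulses delayed by 100 us.
left_pt = pulse_train(900, 20, 0.1)
right_pt = pulse_train(900, 20, 0.1, onset_s=100e-6)

# Envelope ITD: identical pulse timing, right-ear envelope delayed by 100 us.
left_env = pulse_train(900, 20, 0.1)
right_env = pulse_train(900, 20, 0.1, env_phase=-2 * np.pi * 20 * 100e-6)
```

In the first pair the individual pulses carry the interaural delay; in the second pair the pulses are synchronous and only the slow amplitude modulation differs between ears, which is the cue the abstract reports IC neurons are largely insensitive to.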
Jendrichovsky, P.; Lee, H.-K.; Kanold, P. O.
Plastic changes in the brain are primarily limited to early postnatal periods. Recovery of adult brain plasticity is critical for the effective development of therapies. A brief (1-2 week) duration of visual deprivation (dark exposure, DE) in adult mice can trigger functional plasticity of thalamocortical and intracortical circuits in the primary auditory cortex suggesting improved sound processing. We tested if DE enhances the ability of adult mice to detect sounds. We trained and continuously evaluated the behavioral performance of mice in control and DE conditions using automated home-cage training. Consistent with age-related peripheral hearing loss present in C57BL/6J mice, we observed decreased performance for high-frequency sounds with age, which was reduced by DE. In CBA mice with preserved peripheral hearing, we also found that DE enhanced auditory performance in low and mid frequencies over time compared to the control.
Valzolgher, C.; Verdelet, G.; Salemme, R.; Lombardi, L.; Gaveau, V.; Farne, A.; Pavani, F.
When localising sounds in space, the brain relies on internal models that map the auditory input reaching the ears, together with the initial head position, onto coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. This is particularly important for individuals who experience long-term auditory alterations (e.g., hearing loss, hearing aids, cochlear implants) as well as individuals who have to adapt to novel auditory cues when listening in virtual auditory environments. Until now, several methodological constraints have limited our understanding of the mechanisms involved in spatial hearing re-learning. In particular, the potential roles of active listening and head movements have remained largely overlooked. Here, we overcome these limitations by using a novel methodology, based on virtual reality and real-time kinematic tracking, to study the role of active multisensory-motor interactions with sounds in the updating of sound-space correspondences. Participants were immersed in a virtual reality scenario showing 17 speakers at ear level. A real free-field sound could be generated from each visible speaker. Two separate groups of participants localised the sound source either by reaching or by naming the perceived sound source, under binaural or monaural listening. Participants were free to move their head during the task and received audio-visual feedback on their performance. Results showed that both groups compensated rapidly for the short-term auditory alteration caused by monaural listening, improving sound localisation performance across trials. Crucially, compared to naming, reaching the sounds induced faster and larger sound localisation improvements. Furthermore, more accurate sound localisation was accompanied by progressively wider head movements. These two measures were significantly correlated selectively for the reaching group. In conclusion, reaching to sounds in an immersive VR context proved most effective for updating altered spatial hearing. Head movements played an important role in this fast updating, pointing to the importance of active listening when implementing training protocols for improving spatial hearing.
Highlights:
- We studied spatial hearing re-learning using virtual reality and kinematic tracking
- Audio-visual feedback combined with active listening improved monaural sound localisation
- Reaching to sounds improved performance more than naming sounds
- Monaural listening triggered compensatory head-movement behaviour
- Head-movement behaviour correlated with re-learning only when reaching to sounds
Schenberg, L.; Palou, A.; Simon, F.; Bonnard, T.; Barton, C.-E.; Fricker, D.; Tagliabue, M.; Llorens, J.; Beraneck, M.
The functional complementarity of the vestibulo-ocular reflex (VOR) and optokinetic reflex (OKR) allows for optimal combined gaze stabilization responses (CGR) in light. While sensory substitution has been reported following complete vestibular loss, the capacity of the central vestibular system to compensate for partial peripheral vestibular loss remains to be determined. Here, we first demonstrate the efficacy of a 6-week subchronic ototoxic protocol in inducing transient and partial vestibular loss which equally affects the canal- and otolith-dependent VORs. Immunostaining of hair cells in the vestibular sensory epithelia revealed that organ-specific alteration of type I, but not type II, hair cells correlates with functional impairments. The decrease in VOR performance is paralleled with an increase in the gain of the OKR occurring in a specific range of frequencies where VOR normally dominates gaze stabilization, compatible with a sensory substitution process. Comparison of unimodal OKR or VOR versus bimodal CGR revealed that visuo-vestibular interactions remain reduced despite a significant recovery in the VOR. Modeling and sweep-based analysis revealed that the differential capacity to optimally combine OKR and VOR correlates with the reproducibility of the VOR responses. Overall, these results shed light on the multisensory reweighting occurring in pathologies with fluctuating peripheral vestibular malfunction.
Haider, C. L.; Suess, N.; Hauswald, A.; Park, H.; Weisz, N.
Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual inputs promote comprehension. When this input is occluded, speech comprehension becomes more difficult. However, it remains unclear which levels of speech processing are affected, and under which circumstances, by occlusion of the mouth area. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). We additionally added a distractor speaker in half of the trials to create an ecologically valid, difficult listening situation. A decoding model was trained on the clear AV speech and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e. pitch and formant frequencies), while reconstruction of higher-level speech segmentation features (phoneme and word onsets) was especially impaired by masks in difficult listening situations. As we used surgical face masks in our study, which have only mild effects on speech acoustics, we interpret our findings as the result of the occluded lip movements. This idea is in line with recent research showing that visual cortical regions track spectral modulations. Our findings extend previous behavioural results by demonstrating the complex contextual effects of occluding relevant visual information on speech processing.
Highlights:
- Surgical face masks impair neural tracking of speech features
- Tracking of acoustic features is generally impaired, while higher-level segmentation features show effects especially in challenging listening situations
- An explanation is the prevention of a visuo-phonological transformation contributing to audiovisual multisensory integration
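Backward ("decoding") models of the kind described above, which reconstruct speech features from sensor data, are commonly implemented as time-lagged ridge regression. A minimal numpy sketch follows; the lag window, regularisation strength, and data shapes are hypothetical, not the authors' settings.

```python
import numpy as np

def lag_matrix(meg, lags):
    """Stack time-lagged copies of every sensor channel: (samples, channels * len(lags))."""
    cols = []
    for lag in lags:
        shifted = np.roll(meg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0   # zero out samples that wrapped around
        elif lag < 0:
            shifted[lag:] = 0
        cols.append(shifted)
    return np.hstack(cols)

def train_decoder(meg, envelope, lags=range(-10, 1), alpha=1.0):
    """Backward model: ridge regression from lagged sensor data to the speech envelope.
    Negative lags let the decoder use sensor samples *after* the stimulus."""
    X = lag_matrix(meg, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    return w, lags

def reconstruct(meg, w, lags):
    # Apply the trained weights to (lagged) sensor data from any condition.
    return lag_matrix(meg, lags) @ w
```

Reconstruction quality per condition is then typically summarised as the correlation between the reconstructed and the true feature, which is the kind of measure the mask effects above are computed on.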
Fivel, L.; Brunelin, J.; Leroux, G.; Haesebaert, F.; Mondino, M.
Auditory externalization, the perception of a sound source as located outside the head, is essential for spatial hearing and auditory scene analysis. However, its neural correlates remain poorly understood. This study investigated differences in brain activation elicited by externalized versus internalized sound sources. Twenty-nine healthy participants underwent a 3T functional magnetic resonance imaging (fMRI) scan while listening to auditory stimuli presented in three spatialization conditions: reverberant externalized sounds (highest externalization), anechoic externalized sounds (intermediate externalization) and diotic anechoic sounds (internalized). Whole-brain analyses revealed greater activation for externalized compared to internalized sound sources in the left superior temporal gyrus, including the planum temporale, the cerebellum and the left posterior cingulate gyrus. Internalized sounds elicited greater relative activity in the left inferior temporal gyrus. Direct comparison between the two externalized conditions revealed stronger left superior temporal gyrus activation for reverberant sounds, while anechoic sounds preferentially activated the right middle temporal gyrus. These findings confirmed the key role of the planum temporale in auditory externalization and the involvement of higher-order brain regions, suggesting broader networks underpinning the perception of sound location.
Ginnan, G.; Rampp, S.; Schaette, R.; Buchfelder, M.; Mueller-Voggel, N.
Subjective tinnitus describes the experience of hearing phantom sounds (e.g., tones, buzzing, noise). While most people who report having experienced phantom sounds describe them as transient, some experience them chronically, and others describe their tinnitus as severe enough to negatively impact their well-being and daily lives. Currently, no permanent solution has been discovered for preventing or curing tinnitus. This is due to several factors, including an insufficient understanding of the mechanisms that give rise to such auditory sensations, as well as a lack of research investigating the corresponding changes in neural activity associated with the onset and development of tinnitus. Taking advantage of the high spatial and temporal resolution of magnetoencephalography (MEG), we measured cortical activity associated with the development of acute tinnitus-like percepts induced via unilateral auditory deprivation. Over the course of four days, participants continuously wore a silicone earplug in one ear, which led to the experience of phantom sounds in 15 of 16 participants. Frequency analysis of source-localized continuous MEG data revealed a significant increase of gamma power in the primary auditory cortices (A1) during the tinnitus condition (p=0.02), which most likely reflects the neuronal processing correlated with tinnitus perception.
O'Connor, S. A.; Narain, P.; Mahajan, A.; Bancroft, G. L.; Haas, H. A.; Wallen-Friedman, E.; Vasisht, S.; Takano, H.; Kiffer, F. C.; Eisch, A. J.; Yun, S.
Environmental stressors rarely affect just one brain circuit. Most studies assess single cognitive endpoints, obscuring whether vulnerabilities are global or circuit-selective and how effects distribute across interconnected systems. To address this, we used galactic cosmic radiation (GCR), a Mars mission-relevant stressor that disrupts the hippocampal-nucleus accumbens-prefrontal circuit. C57BL/6J mice received 33-ion GCR simulation (33-GCR, 0.75 Gy) or sham radiation with the Nrf2-activating compound CDDO-EA or vehicle, followed by multi-domain behavioral testing in both sexes. Under very high memory load, male Veh/33-GCR mice showed enhanced pattern separation compared to Veh/Sham males, an effect normalized by CDDO-EA. Female mice showed no radiation-induced changes in pattern separation but weighed 9-18% more than Veh/Sham females and had reduced locomotor activity. Reward-based learning differed by sex: males showed no changes, while female Veh/33-GCR mice displayed enhanced reward anticipation that was further increased by CDDO-EA alone, with both treatments contributing to elevated goal-tracking. For behavioral flexibility, CDDO-EA impaired reversal learning in males regardless of radiation, while 33-GCR impaired reversal learning in females regardless of CDDO-EA. Principal component analysis revealed that treatments disrupted specific circuit relationships while leaving others intact, consistent with selective rather than global cognitive effects. Fiber photometry showed enhanced dentate gyrus encoding activity in irradiated males under high memory load. Combined CDDO-EA/33-GCR selectively reduced dentate gyrus progenitors in females. Males and females showed distinct, circuit-selective vulnerability patterns, demonstrating that multi-domain, both-sex assessment is necessary to capture how stressors and interventions affect integrated brain function. 
CDDO-EA proved to be a double-edged sword: protecting one cognitive domain while impairing another, a trade-off invisible to single-endpoint assessment. This framework has immediate relevance for astronaut risk assessment and extends to any context where neuroprotective interventions are evaluated against environmental stressors.
Liu, C.; Yu, N.; Fang, S.; Ding, D.-x.; Qin, H.-d.; Yuan, S.-l.; Lv, P.
Cochlear implants (CIs) are by far the most effective option to partially restore hearing for patients with sensorineural hearing impairment (HI) by electrically stimulating spiral ganglion neurons (SGNs). However, the wide current spread from each electrode restricts the precision and quality of electrical CIs. Recently, optogenetic stimulation of the cochlea has been shown to be a more precise approach, via an adeno-associated virus (AAV) carrying the gene encoding the light-sensitive channelrhodopsin-2 (ChR2). Here, we summarize recent work on stable and accurate ChR2 expression and compare electrophysiological recordings of optogenetic and acoustic stimulation in adult guinea pigs. Light stimulation generated auditory responses that were similar to those evoked by acoustic stimulation. Moreover, normal-hearing adult guinea pigs responded with rising amplitudes as light intensity increased. In conclusion, optogenetic cochlear stimulation achieved good spectral selectivity of artificial sound encoding in a new adult rodent model, suggesting that optogenetics might be applied to improve cochlear implants in the future.
Packheiser, J.; Soyman, E.; Paradiso, E.; Ramaaker, E.; Sahin, N.; Muralidharan, S.; Wohr, M.; Gazzola, V.; Keysers, C.
Emotional contagion refers to the transmission of emotions from one conspecific to another. Previous research in rodents has demonstrated that the self-experience of footshocks enhances how much an observer is affected by the emotional state of a conspecific in pain or distress. We hypothesized that auditory auto-conditioning contributes to this enhancement: during the observer's own experience of shocks, the animal associates its own audible nocifensive responses, i.e. its pain squeaks, with the negative affective state induced by the shock. When the animal later witnesses a cage mate receive shocks and hears it squeak, the previously strengthened association between fear and squeaks could be a mechanism eliciting the enhanced fearful response in the observer. As hypothesized, in a first study we found that pre-exposure to shocks increased freezing and distress-associated 22 kHz vocalizations upon playback of pain squeaks. Freezing was also increased during playback of phase-scrambled squeaks, but 22 kHz calls were more frequent during playback of regular squeaks. Core to the notion of auto-conditioning is that the effect of pre-exposure is due to the pairing of a pain state with hearing one's own pain squeaks. In a second study, we therefore compared the response to squeak playbacks after animals had been pre-exposed to pairings of a CO2 laser with squeak playback against three control groups pre-exposed to the CO2 laser alone, to squeak playbacks alone, or to neither. However, we found no differences in freezing or 22 kHz calls among the experimental groups. In summary, we demonstrate that pain squeaks are sufficient to trigger fear in a way that critically depends on the nature of an animal's prior experience, and we discuss why pairing a CO2 laser with pain squeaks cannot substitute for footshock pre-exposure.
Teratani-Ota, Y.; Wiltgen, B. J.
The hippocampus is thought to combine "what" and "where" information from the cortex so that objects and events can be represented within the spatial context in which they occur. Surprisingly, then, these distinct types of information remain partially segregated in the output region of the hippocampus, area CA1. In this region, objects preferentially activate neurons in the distal segment (adjacent to the subiculum), while spatial locations are precisely represented by neurons in the proximal segment (adjacent to CA2). This difference likely results from distinct anatomical connections: proximal CA1 receives direct input from the medial entorhinal cortex (which encodes spatial context), whereas distal CA1 has reciprocal connections with the lateral entorhinal cortex (which encodes objects and events). Based on these findings, it has been proposed that CA1 contains two distinct representations: one that encodes the animal's spatial location and another that encodes the objects present in the environment. The current study aimed to determine the role of distal CA1 in learning the locations of objects in an environment. To do this, we first examined c-Fos expression in proximal and distal CA1 to replicate previous findings and confirm that neurons in these distinct segments respond to different stimuli. Because previous studies indicate that catecholamines can regulate the activity of CA1 segments, we then investigated the role of catecholamines in learning object locations, using 6-OHDA to lesion catecholaminergic input and SCH23390 to block D1/D5 receptors. Finally, we monitored calcium activity with fiber photometry while animals performed a hippocampus-dependent object location memory task.